While many companies have long since moved their application data and analytics data into the cloud, operational data has lagged a bit behind.
There are a variety of reasons for this, but a primary one is that operational data is often both sensitive (containing PII) and mission-critical. Many companies are hesitant to fix something that, from their perspective, has not been broken, and hesitant to put critical data into the hands of people outside the company.
This mindset is changing fast, though. While the old model may not be fully broken for everyone, it is breaking. For one thing, in 2022 the old way has simply become too expensive. And managed cloud database options are maturing, addressing some of the performance and security concerns that early adopters once had about them.
The past couple of years, and 2022 in particular, have seen a large shift in the balance of supply and demand in the labor market for the specialized experts, such as SREs (site reliability engineers) and DBAs (database administrators), needed to set up, operate, and maintain on-prem systems. These experts have become very difficult and very expensive to hire.
Why? In 2021, the market saw a massive increase in VC funding. The number of unicorns (companies valued at over $1 billion) minted by venture investors jumped roughly 4x that year. “We stopped calling it a record year for venture capital in 2021 because it didn’t even do justice to what was going on,” PitchBook senior VC analyst Kyle Stanford told CNBC.
That meant that in addition to the Fortune 2000, who have been hiring database experts for decades, there were suddenly a record number of freshly funded startups looking to make those same hires. But good SREs and DevOps engineers (for example) don’t grow on trees, and they can’t be created overnight — these are roles that require a lot of specialized expertise and years of hands-on experience.
With record-high demand for these experts but no meaningful change in the number of them available on the job market, the cost of hiring them increased dramatically. The difficulty of hiring them also went way up – even companies that could match their wage demands have found hiring difficult because there simply aren’t that many viable candidates out there to hire.
At the same time, companies are increasingly looking to operate complex multi-region deployments that can meet specific security requirements — and building a viable on-prem system is a challenge, even with the right hires. Increasingly, decision makers are asking themselves: is this challenge worth it?
Managed cloud solutions such as CockroachDB Dedicated come with the advantage of not having to make those hires while still having your database managed by the most qualified experts in the business – because who understands the database better than the people who built it? In 2022, the end result is often a database that’s managed better and costs less than operating an on-prem setup.
Moving to a managed cloud solution also typically increases flexibility and adaptability. Say, for example, a company wants to scale up its global operations. With a managed cloud database such as CockroachDB, this can be as simple as pressing a button to add new nodes across the globe – your database is scaled up within seconds. With an on-prem setup, even if you have the budget for all of the hiring and infrastructure that a scale-up requires, hiring takes time.
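To make that button-press concrete: once new nodes join, you can confirm the cluster's new topology with a plain SQL query. The sketch below is illustrative rather than anything from the report; it assumes a reachable CockroachDB cluster, a placeholder connection string, and the crdb_internal.gossip_nodes virtual table (whose exact columns can vary by version), accessed via psycopg2 over CockroachDB's PostgreSQL wire protocol.

```python
# Illustrative sketch: list the nodes (and their localities) that make up a
# CockroachDB cluster after a scale-up. Assumes psycopg2 is installed and the
# connection string below is replaced with your own (placeholder values).
import psycopg2

conn = psycopg2.connect(
    "postgresql://user:password@your-cluster-host:26257/defaultdb?sslmode=require"
)
with conn, conn.cursor() as cur:
    # crdb_internal.gossip_nodes exposes cluster topology; the exact set of
    # columns can vary across CockroachDB versions.
    cur.execute("SELECT node_id, address, locality FROM crdb_internal.gossip_nodes")
    for node_id, address, locality in cur.fetchall():
        print(f"node {node_id} at {address}: {locality}")
conn.close()
```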
Especially when labor market conditions are making it nearly impossible to find qualified candidates, moving to a managed cloud database can be a faster, better, and more affordable solution.
Although managed cloud solutions are looking increasingly affordable, that doesn’t mean that understanding the total cost of ownership for a managed cloud instance is easy. To shed more light on this question, we looked at the total cost of operating a cloud database across the three major public clouds (AWS, GCP, and Azure) in the 2022 Cloud Report.
While each cloud has specific charges associated with the particular instance type you choose for operating your database, we found that even for relatively small amounts of storage, the cost of running a given OLTP workload is influenced much more by the cost of that storage than by the cost of the instance itself.
Across all three clouds, storage accounted for roughly 30% of the total sum we spent on a given instance if we selected general-purpose storage volumes (pd-ssd on GCP, gp3 on AWS, premium-disk on Azure). If we selected high-performance storage volumes (pd-extreme on GCP, io2 on AWS, ultra-disk on Azure), that number jumped to nearly 70% of the total instance cost for all three clouds.
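To see why the storage tier dominates, a hypothetical back-of-the-envelope calculation helps (the prices and sizes below are invented placeholders, not figures from the report): holding the instance price fixed, moving from a general-purpose to a high-performance per-GB rate is enough to push storage from roughly a third to roughly two-thirds of the monthly bill.

```python
# Hypothetical back-of-the-envelope: storage's share of total monthly cost.
# All prices are invented placeholders, not figures from the Cloud Report.
INSTANCE_MONTHLY = 700.0   # hypothetical VM cost per month, in dollars
STORAGE_GB = 1000          # hypothetical provisioned capacity

def storage_share(price_per_gb: float) -> float:
    """Return storage's fraction of the combined (instance + storage) cost."""
    storage = STORAGE_GB * price_per_gb
    return storage / (INSTANCE_MONTHLY + storage)

# A general-purpose volume at ~$0.30/GB-month vs. a high-performance volume
# at ~$1.60/GB-month (again: illustrative numbers only).
print(f"general-purpose:  {storage_share(0.30):.0%} of total")  # ~30%
print(f"high-performance: {storage_share(1.60):.0%} of total")  # ~70%
```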
We also found that data transfer costs had an outsized impact on the total cost of operating a cloud database instance.
These costs — particularly the cost of high-performance storage — are significant. However, our results suggest that most companies can safely opt for the lower-cost general-purpose storage without a significant drop in performance. Among the instance types we tested this year, there was not a single OLTP benchmark for which top-tier storage was the most cost-effective choice. High-performance options may still be necessary for specific workloads that require very high IOPS or very low storage latency, though.
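If you want to act on that finding on AWS, for example, here is a minimal sketch of provisioning the general-purpose tier with boto3 (the availability zone, size, IOPS, and throughput values are placeholders, not recommendations). One design note: gp3 lets you provision IOPS and throughput independently of capacity, which is often enough headroom to avoid stepping up to io2 at all.

```python
# Sketch: provisioning a general-purpose gp3 EBS volume with boto3.
# All values (region, AZ, size, IOPS, throughput) are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,            # GiB
    VolumeType="gp3",    # general-purpose tier; "io2" is the high-performance tier
    Iops=6000,           # gp3 provisions IOPS independently of volume size
    Throughput=250,      # MiB/s, also independently provisionable on gp3
    TagSpecifications=[{
        "ResourceType": "volume",
        "Tags": [{"Key": "purpose", "Value": "oltp-database"}],
    }],
)
print("created volume:", volume["VolumeId"])
```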
For additional details and information that’ll help you choose the most cost-effective cloud and instance for your OLTP workloads, check out the full 2022 Cloud Report — it’s free!